An Overview of MPI Characteristics of Exascale Proxy Applications
Abstract
The scale of applications and computing systems is increasing tremendously and needs to increase even further to realize exascale systems. As the number of nodes keeps growing, communication has become key to high performance. The Message Passing Interface (MPI) has evolved into the de facto standard for inter-node data transfers. Consequently, MPI is well suited to serve as a proxy for an analysis of the communication characteristics of exascale proxy applications. This work presents characteristics such as time spent in certain operations, point-to-point versus collective communication, and message sizes and rates, gathered from a comprehensive trace analysis. We provide an understanding of how applications use MPI to exploit node-level parallelism, always with respect to scalability, and also locate parts that require further optimization. We emphasize the analysis of message matching and report queue lengths and the associated matching rates. It is shown that most data is transferred via point-to-point operations, but most time is spent in collectives. Message matching rates depend significantly on the length of the message queues, which tend to grow with the number of processes. As messages also become smaller, efficient matching is essential for high message rates in large-scale applications.
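For readers less familiar with the two communication classes the abstract contrasts, the following minimal C/MPI sketch (illustrative only, not code from the traced proxy applications; the message size N and the program itself are assumptions) places a point-to-point ring exchange next to a collective reduction.

```c
/* pt2pt_vs_collective.c -- illustrative sketch (not from the paper) of the two
 * MPI communication classes the trace analysis distinguishes:
 * point-to-point transfers and collectives.
 * Compile: mpicc pt2pt_vs_collective.c -o pt2pt_vs_collective
 * Run:     mpirun -np 4 ./pt2pt_vs_collective
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define N 1024  /* illustrative message size in doubles; message-size
                   distributions are one of the reported characteristics */

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double *buf     = malloc(N * sizeof(double));
    double *recvbuf = malloc(N * sizeof(double));
    for (int i = 0; i < N; ++i)
        buf[i] = (double)rank;

    /* Point-to-point: a ring exchange. Each incoming message is matched
     * against the receiver's queues by communicator, source, and tag;
     * longer queues mean lower matching rates. */
    int right = (rank + 1) % size;
    int left  = (rank - 1 + size) % size;
    MPI_Sendrecv(buf, N, MPI_DOUBLE, right, 0,
                 recvbuf, N, MPI_DOUBLE, left, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    /* Collective: a global reduction. Per the abstract, most data volume
     * moves through point-to-point calls, while most communication time is
     * spent in collectives such as this one. */
    double local = buf[0], global = 0.0;
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum of ranks' first elements = %g\n", global);

    free(buf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}
```

The point-to-point receive is where message matching happens: each arriving message must be checked against the posted-receive and unexpected-message queues, which is why the reported matching rates degrade as those queues lengthen.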
Similar resources
Run-Through Stabilization: An MPI Proposal for Process Fault Tolerance
The MPI standard lacks semantics and interfaces for sustained application execution in the presence of process failures. Exascale HPC systems may require scalable, fault resilient MPI applications. The mission of the MPI Forum’s Fault Tolerance Working Group is to enhance the standard to enable the development of scalable, fault tolerant HPC applications. This paper presents an overview of the ...
CloverLeaf: Preparing Hydrodynamics Codes for Exascale
In this work we directly evaluate five candidate programming models for future exascale applications (MPI, MPI+OpenMP, MPI+OpenACC, MPI+CUDA and CAF) using a recently developed Lagrangian-Eulerian explicit hydrodynamics mini-application. The aim of this work is to better inform the exascale planning at large HPC centres such as AWE. Such organisations invest significant resources maintaining an...
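As a rough illustration of the hybrid style evaluated alongside plain MPI, here is a minimal MPI+OpenMP sketch of a one-dimensional halo exchange with a threaded update; it is not CloverLeaf code, and the domain size and variable names are assumptions made for this sketch.

```c
/* hybrid_sketch.c -- generic MPI+OpenMP sketch (not CloverLeaf itself):
 * MPI moves halo cells between neighbouring ranks, OpenMP threads update
 * the local sub-domain.
 * Compile: mpicc -fopenmp hybrid_sketch.c -o hybrid_sketch
 */
#include <mpi.h>
#include <omp.h>
#include <stdlib.h>

#define NLOCAL 1000  /* cells owned by each rank (illustrative size) */

int main(int argc, char **argv)
{
    int provided, rank, size;
    /* Request threaded MPI; FUNNELED suffices when only the main thread
     * calls MPI, as below. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Local field with one halo cell on each side. */
    double *u    = calloc(NLOCAL + 2, sizeof(double));
    double *unew = calloc(NLOCAL + 2, sizeof(double));
    for (int i = 1; i <= NLOCAL; ++i)
        u[i] = (double)rank;

    int left  = (rank == 0)        ? MPI_PROC_NULL : rank - 1;
    int right = (rank == size - 1) ? MPI_PROC_NULL : rank + 1;

    /* Halo exchange via point-to-point MPI. */
    MPI_Sendrecv(&u[1], 1, MPI_DOUBLE, left, 0,
                 &u[NLOCAL + 1], 1, MPI_DOUBLE, right, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Sendrecv(&u[NLOCAL], 1, MPI_DOUBLE, right, 1,
                 &u[0], 1, MPI_DOUBLE, left, 1,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    /* Node-level parallelism: OpenMP threads update the interior cells. */
    #pragma omp parallel for
    for (int i = 1; i <= NLOCAL; ++i)
        unew[i] = 0.5 * (u[i - 1] + u[i + 1]);

    free(u);
    free(unew);
    MPI_Finalize();
    return 0;
}
```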
A Global Exception Fault Tolerance Model for MPI
Driven both by the anticipated hardware reliability constraints for exascale systems, and the desire to use MPI in a broader application space, there is an ongoing effort to incorporate fault tolerance constructs into MPI. Several fault-tolerant models have been proposed for MPI [1], [2], [3], [4]. However, despite these attempts, and over six years of effort by the MPI Forum’s [5] Fault Toleran...
New Execution Models are Required for Big Data at Exascale
Computing on Big Data involves algorithms whose performance characteristics are fundamentally different from those of traditional scientific computing applications. Supercomputers are programmed today using execution models, such as CSP (and its primary realization, MPI), that are designed and optimized for traditional applications, but those models have weaknesses when applied to Big Data appl...
Personalized MPI library for Exascale Applications and Environments
Minimizing the communication costs associated with a parallel application is a key challenge for the scalability of petascale and future exascale applications. This paper introduces the notion of a personalized MPI library that is customized for a particular application and platform. The work is based on the Open MPI communication library, which has a large number of runtime parameters that can ...
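As context for what "runtime parameters" means in Open MPI, the following hedged example shows how MCA parameters are typically inspected and set; exact parameter and component names vary across Open MPI versions, and ./proxy_app is a placeholder application, not one from the paper.

```sh
# Query the runtime (MCA) parameters Open MPI exposes for a component;
# names and defaults differ between Open MPI versions.
ompi_info --param btl tcp

# Select values per run on the command line ...
mpirun --mca btl self,tcp -np 4 ./proxy_app

# ... or through the environment (OMPI_MCA_<name>=<value>).
export OMPI_MCA_btl=self,tcp
mpirun -np 4 ./proxy_app
```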
Publication date: 2017